Text Mining of Policy and Regulation Documents

1. Load the required analysis packages

library(Rwordseg)      # Chinese word segmentation (segmentCN)
library(rvest)         # web scraping
library(RColorBrewer)  # color palettes
library(wordcloud2)    # interactive word clouds
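
Note that Rwordseg is not on CRAN, so a plain install.packages("Rwordseg") usually fails. A minimal install sketch, assuming the package is still hosted on R-Forge and on GitHub under lijian13/Rwordseg (both assumptions; older builds also required rJava and a Java runtime):

install.packages("Rwordseg", repos = "http://R-Forge.R-project.org") # assumption: R-Forge still serves the package
# devtools::install_github("lijian13/Rwordseg")                      # fallback, assuming this GitHub repo exists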

2. Scrape the page's text with a web crawler

web <- read_html(x = "http://www.gov.cn/xinwen/2020-06/30/content_5522993.htm")
words <- web %>% html_nodes("p") %>% html_text()       # extract the text of all <p> nodes
text <- paste(words, collapse = "")                    # collapse all paragraphs into one string
textTemp <- gsub("[0-90123456789< > ~]", "", text)    # strip ASCII and full-width digits plus special characters
data <- unlist(lapply(X = textTemp, FUN = segmentCN)) # segment the text with the segmentCN tokenizer
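
Before filtering, it is worth spot-checking the segmenter's output; the two base-R calls below simply inspect the token vector data produced above:

length(data)   # total number of tokens returned by segmentCN
head(data, 20) # first 20 tokens, to eyeball segmentation quality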

3. Remove stop words with a custom function

stopwords <- unlist(read.table("chineseStopWords.txt", fileEncoding = "GBK", stringsAsFactors = F)) ### read the Chinese stop-word list
### function that drops every token found in the stop-word list
removeStopWords <- function(x, stopwords) {
  temp <- character(0)
  index <- 1
  xLen <- length(x)
  while (index <= xLen) {
    if (length(stopwords[stopwords == x[index]]) < 1)
      temp <- c(temp, x[index])
    index <- index + 1
  }
  temp
}

dataTemp <- lapply(data, removeStopWords, stopwords) ### apply the stop-word filter to every token
words <- lapply(dataTemp, strsplit, " ")
wordsNum <- table(unlist(words))
wordsNum <- sort(wordsNum)                           ### sort by frequency, ascending
wordsData <- data.frame(words = names(wordsNum), freq = wordsNum)
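
The while loop above works but grows temp one element at a time; idiomatic R performs the same filter with vectorized subsetting. A behavior-equivalent sketch (removeStopWordsFast is a hypothetical name; unlike setdiff(), this keeps duplicates and token order):

removeStopWordsFast <- function(x, stopwords) x[!(x %in% stopwords)] # keep tokens absent from the stop list
dataTemp2 <- lapply(data, removeStopWordsFast, stopwords)
identical(dataTemp2, dataTemp)                                       # should be TRUE

Note also that wordsData ends up with three columns (words plus the two columns of the frequency table), which is why step 4 selects columns 2:3.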

4. Visualize high-frequency words with a word cloud

word.top100 <- tail(wordsData, 100) ### the 100 most frequent words (table is sorted ascending)

row.names(word.top100) <- word.top100$words
word.top100 <- word.top100[, 2:3]   ### keep the two columns that came from the frequency table
colnames(word.top100) <- c("word", "freq")
colors <- brewer.pal(8, "Set2")
wordcloud2(word.top100, size = 2, minRotation = -pi/6, maxRotation = -pi/6,
           rotateRatio = 1, fontFamily = "微软雅黑",            # Microsoft YaHei
           color = rep(colors, length.out = nrow(word.top100))) # apply the Set2 palette, recycled per word

High-frequency words from the 14th meeting of the Central Commission for Comprehensively Deepening Reform
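
The word cloud renders as an interactive htmlwidget in the viewer pane. To keep the figure, one option, sketched here assuming the htmlwidgets package (a dependency of wordcloud2) is available, is to save it as an HTML file:

wc <- wordcloud2(word.top100, size = 2, minRotation = -pi/6, maxRotation = -pi/6,
                 rotateRatio = 1, fontFamily = "微软雅黑")
htmlwidgets::saveWidget(wc, "wordcloud.html", selfcontained = FALSE) # selfcontained = TRUE requires pandoc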